This notebook provides an analysis of On-Time Flight Performance and Departure Delays data using GraphFrames for Apache Spark.

Connecting Apache Spark to Azure Cosmos DB accelerates your ability to solve fast-moving data science problems, where your data can be quickly persisted and retrieved using Azure Cosmos DB's DocumentDB API. With the Spark to Cosmos DB connector, you can more easily solve scenarios including (but not limited to) blazing-fast IoT pipelines, updatable columns when performing analytics, push-down predicate filtering, and advanced analytics and data science against fast-changing data in a geo-replicated, managed document store with guaranteed SLAs for consistency, availability, low latency, and throughput.
The Spark to Cosmos DB connector utilizes the Azure DocumentDB Java SDK; the data flow is as follows:
%%configure
{
    "name": "Spark-to-Cosmos_DB_Connector",
    "executorMemory": "8G",
    "executorCores": 2,
    "numExecutors": 2,
    "driverCores": 2,
    "jars": ["wasb:///example/jars/0.0.3c/azure-documentdb-1.10.0.jar", "wasb:///example/jars/0.0.3c/azure-cosmosdb-spark-0.0.3-SNAPSHOT.jar"],
    "conf": {
        "spark.jars.packages": "graphframes:graphframes:0.5.0-spark2.1-s_2.11",
        "spark.jars.excludes": "org.scala-lang:scala-reflect"
    }
}
# Connection
flightsConfig = {
    "Endpoint": "https://doctorwho.documents.azure.com:443/",
    "Masterkey": "xWpfqUBioucC2YkWV6uHVhgZtsPIjIVmE4VDPyNYnw2QUazvCHm3rnn9AeSgglLOT3yfjCR5YbLeh5MCc3aKNw==",
    "Database": "DepartureDelays",
    "preferredRegions": "Central US",
    "Collection": "flights_pcoll",
    "SamplingRatio": "1.0",
    "schema_samplesize": "1000",
    "query_pagesize": "2147483647",
    "query_custom": "SELECT c.date, c.delay, c.distance, c.origin, c.destination FROM c"
}
flights = spark.read.format("com.microsoft.azure.cosmosdb.spark").options(**flightsConfig).load()
flights.count()
flights.cache()
flights.createOrReplaceTempView("flights")
# Set File Paths
airportsnaFilePath = "wasb://data@doctorwhostore.blob.core.windows.net/airport-codes-na.txt"
# Obtain airports dataset
airportsna = spark.read.csv(airportsnaFilePath, header='true', inferSchema='true', sep='\t')
airportsna.createOrReplaceTempView("airports")
%%sql
select count(1) from flights where origin = 'LAS'
%%sql
select concat(concat((dense_rank() OVER (PARTITION BY 1 ORDER BY TotalDelays DESC)-1), '. '), destination) as destination, TotalDelays
from (
select a.city as destination, sum(f.delay) as TotalDelays, count(1) as Trips
from flights f
join airports a
on a.IATA = f.destination
where f.origin = 'LAS'
and f.delay > 0
group by a.city
order by sum(delay) desc limit 10
) a
%%sql
select a.city as destination, percentile_approx(f.delay, 0.5) as median_delay
from flights f
join airports a
on a.IATA = f.destination
where f.origin = 'LAS'
group by a.city
order by percentile_approx(f.delay, 0.5)
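`percentile_approx(f.delay, 0.5)` returns an approximate median. On a small sample, the exact computation it approximates looks like this (a pure-Python sketch; `flights_sample` and its delay values are made up for illustration, not part of the dataset):

```python
from statistics import median

# Hypothetical sample rows: (origin, destination city, delay in minutes)
flights_sample = [
    ("LAS", "Seattle", 10), ("LAS", "Seattle", -2), ("LAS", "Seattle", 45),
    ("LAS", "Denver", 5), ("LAS", "Denver", 0),
]

def median_delay_by_destination(rows):
    """Group delays by destination city and take the exact median,
    mirroring what percentile_approx(delay, 0.5) approximates in Spark SQL."""
    by_city = {}
    for origin, city, delay in rows:
        if origin == "LAS":
            by_city.setdefault(city, []).append(delay)
    return {city: median(delays) for city, delays in by_city.items()}

print(median_delay_by_destination(flights_sample))
# {'Seattle': 10, 'Denver': 2.5}
```

Note that, unlike the average, the median is robust to a few extreme delays, which is why it is a useful second view of the same data.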
Using GraphFrames for Apache Spark to run degree and motif queries against Cosmos DB
# Build `departureDelays` DataFrame
departureDelays = spark.sql("""
select cast(f.date as int) as tripid,
       cast(concat(concat(concat(concat(concat(concat('2014-', concat(concat(substr(cast(f.date as string), 1, 2), '-')), substr(cast(f.date as string), 3, 2)), ' '), substr(cast(f.date as string), 5, 2)), ':'), substr(cast(f.date as string), 7, 2)), ':00') as timestamp) as `localdate`,
       cast(f.delay as int),
       cast(f.distance as int),
       f.origin as src,
       f.destination as dst,
       o.city as city_src,
       d.city as city_dst,
       o.state as state_src,
       d.state as state_dst
from flights f
join airports o on o.iata = f.origin
join airports d on d.iata = f.destination
""")
# Create Temporary View and cache
departureDelays.createOrReplaceTempView("departureDelays")
departureDelays.cache()
# Note: ensure you have already installed the GraphFrames spark-package
import os
sc.addPyFile(os.path.expanduser('./graphframes_graphframes-0.5.0-spark2.1-s_2.11.jar'))
from pyspark.sql.functions import *
from graphframes import *
# Create Vertices (airports) and Edges (flights)
tripVertices = airportsna.withColumnRenamed("IATA", "id").distinct()
tripEdges = departureDelays.select("tripid", "delay", "src", "dst", "city_dst", "state_dst")
# Cache Vertices and Edges
tripEdges.cache()
tripVertices.cache()
# Create TripGraph
tripGraph = GraphFrame(tripVertices, tripEdges)
Note: the joins are there to show the city names instead of the IATA codes. The dense_rank() code helps order the data correctly when viewed in Jupyter notebooks.
flightDelays = tripGraph.edges.filter("src = 'LAS' and delay > 0").groupBy("src", "dst").avg("delay").sort(desc("avg(delay)"))
flightDelays.createOrReplaceTempView("flightDelays")
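The edge filter and groupBy above compute the average delay per route out of LAS, keeping only delayed departures. A minimal pure-Python equivalent of that aggregation (the `edges` sample and its values are illustrative, not real flight data):

```python
# Hypothetical edges: (src, dst, delay) tuples, as in tripGraph.edges
edges = [
    ("LAS", "SFO", 30), ("LAS", "SFO", 10), ("LAS", "JFK", 60),
    ("LAS", "SFO", -5),   # on-time/early departures are filtered out
    ("SEA", "SFO", 20),   # non-LAS origins are filtered out
]

def avg_delay_from(edges, src):
    """Average delay per (src, dst) route, keeping only delayed departures,
    mirroring tripGraph.edges.filter(...).groupBy(...).avg('delay')."""
    totals = {}
    for s, d, delay in edges:
        if s == src and delay > 0:
            t = totals.setdefault(d, [0, 0])
            t[0] += delay   # running sum of delays
            t[1] += 1       # running count of delayed trips
    return sorted(((d, total / n) for d, (total, n) in totals.items()),
                  key=lambda x: -x[1])

print(avg_delay_from(edges, "LAS"))
# [('JFK', 60.0), ('SFO', 20.0)]
```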
%%sql
select concat(concat((dense_rank() OVER (PARTITION BY 1 ORDER BY avg_delay DESC)-1), '. '), city) as destination,
avg_delay
from (
select a.city, `avg(delay)` as avg_delay
from flightDelays f
join airports a
on f.dst = a.iata
order by `avg(delay)` desc
limit 10
) s
It would take a relatively complicated SQL statement to count all of the edges into and out of each vertex, grouped by vertex. Instead, we can use the graph degrees method.
airportConnections = tripGraph.degrees.sort(desc("degree"))
airportConnections.createOrReplaceTempView("airportConnections")
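The degrees method simply counts, for each vertex, the edges that touch it (in-degree plus out-degree). A pure-Python sketch of the same computation on a toy edge list (the airport pairs here are illustrative):

```python
from collections import Counter

# Hypothetical flight edges as (src, dst) pairs
edges = [("SEA", "SFO"), ("SFO", "SEA"), ("SEA", "JFK"), ("LAX", "SEA")]

def degrees(edges):
    """Count the edges touching each vertex, like GraphFrame.degrees:
    each edge contributes 1 to its source and 1 to its destination."""
    c = Counter()
    for src, dst in edges:
        c[src] += 1
        c[dst] += 1
    return dict(c)

print(degrees(edges))
# {'SEA': 4, 'SFO': 2, 'JFK': 1, 'LAX': 1}
```

In the notebook, sorting this count descending gives the busiest airports, exactly what `tripGraph.degrees.sort(desc("degree"))` produces.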
%%sql
select concat(concat((dense_rank() OVER (PARTITION BY 1 ORDER BY degree DESC)-1), '. '), city) as destination,
degree
from (
select a.city, f.degree
from airportConnections f
join airports a
on a.iata = f.id
order by f.degree desc
limit 10
) a
filteredPaths = tripGraph.bfs(
fromExpr = "id = 'SEA'",
toExpr = "id = 'SJC'",
maxPathLength = 1)
filteredPaths.show()
What about SJC and BUF, i.e. a direct flight between San Jose and Buffalo? And what about all the different variations of flights between San Jose and Buffalo with only one stopover in between?
filteredPaths = tripGraph.bfs(
fromExpr = "id = 'SJC'",
toExpr = "id = 'BUF'",
maxPathLength = 1)
filteredPaths.show()
filteredPaths = tripGraph.bfs(
fromExpr = "id = 'SJC'",
toExpr = "id = 'BUF'",
maxPathLength = 2)
filteredPaths.show()
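bfs with maxPathLength = 2 finds paths with at most one intermediate stop. A minimal pure-Python sketch of the one-stop search (the edge list here is a made-up toy example):

```python
# Hypothetical edges as (src, dst) pairs
edges = [("SJC", "LAX"), ("LAX", "BUF"), ("SJC", "DEN"), ("DEN", "BUF"),
         ("SJC", "SEA")]

def one_stop_paths(edges, src, dst):
    """All src -> via -> dst paths, mirroring what
    tripGraph.bfs(fromExpr, toExpr, maxPathLength=2) returns as (from, v1, to)."""
    outgoing = {}
    for s, d in edges:
        outgoing.setdefault(s, []).append(d)
    return [(src, via, dst)
            for via in outgoing.get(src, [])
            if dst in outgoing.get(via, [])]

print(one_stop_paths(edges, "SJC", "BUF"))
# [('SJC', 'LAX', 'BUF'), ('SJC', 'DEN', 'BUF')]
```

Grouping the `via` column of these paths (the `v1` column in the GraphFrames result) and counting is what identifies the most common transfer points in the next cell.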
commonTransferPoint = filteredPaths.groupBy("v1.id", "v1.City").count().orderBy(desc("count"))
commonTransferPoint.createOrReplaceTempView("commonTransferPoint")
%%sql
select concat(concat((dense_rank() OVER (PARTITION BY 1 ORDER BY Trips DESC)-1), '. '), city) as destination,
Trips
from (
select City, `count` as Trips from commonTransferPoint order by Trips desc limit 10
) a